killer.sh Question Set 2
CKA Simulator B Kubernetes 1.32
Each question needs to be solved on a specific instance. Use SSH to connect to the correct instance shown before each question.
Question 1 | DNS / FQDN / Headless Service
Solve this question on: ssh cka6016
Update the ConfigMap control-config used by the Deployment controller in Namespace lima-control with the correct FQDN values for:
- DNS_1: Service kubernetes in Namespace default
- DNS_2: Headless Service department in Namespace lima-workload
- DNS_3: Pod section100 in Namespace lima-workload (should work even if Pod IP changes)
- DNS_4: A Pod with IP 1.2.3.4 in Namespace kube-system
Solution:
Connect to the pod and test DNS queries:
# Connect to the controller pod
k -n lima-control exec -it controller-586d6657-gdmch -- sh
# Test DNS queries
nslookup kubernetes.default.svc.cluster.local
nslookup department.lima-workload.svc.cluster.local
nslookup section100.section.lima-workload.svc.cluster.local
nslookup 1-2-3-4.kube-system.pod.cluster.local
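These lookups follow the standard cluster DNS name patterns (the default cluster domain cluster.local is assumed):
# Service:                        <service>.<namespace>.svc.cluster.local
# Pod behind a Headless Service:  <pod>.<service>.<namespace>.svc.cluster.local
# Pod by IP:                      <ip-with-dashes>.<namespace>.pod.cluster.local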
Update the ConfigMap:
k -n lima-control edit cm control-config
apiVersion: v1
data:
  DNS_1: kubernetes.default.svc.cluster.local
  DNS_2: department.lima-workload.svc.cluster.local
  DNS_3: section100.section.lima-workload.svc.cluster.local
  DNS_4: 1-2-3-4.kube-system.pod.cluster.local
Restart the deployment:
kubectl -n lima-control rollout restart deploy controller
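Optionally confirm the updated values and the new Pods:
k -n lima-control get cm control-config -o yaml
k -n lima-control get pod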
Question 2 | Create a Static Pod and Service
Solve this question on: ssh cka2560
Create a Static Pod named my-static-pod in Namespace default on the controlplane node with image nginx:1-alpine and resource requests for 10m CPU and 20Mi memory.
Create a NodePort Service named static-pod-service which exposes that static Pod on port 80.
Solution:
# Become root
sudo -i
# Go to the static pod directory
cd /etc/kubernetes/manifests/
# Create static pod manifest
k run my-static-pod --image=nginx:1-alpine -o yaml --dry-run=client > my-static-pod.yaml
Edit the pod manifest to add resource requests:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-static-pod
  name: my-static-pod
spec:
  containers:
  - image: nginx:1-alpine
    name: my-static-pod
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
Wait for the Pod to start; the kubelet appends the node name, so the static Pod shows up as my-static-pod-cka2560. Then create the Service:
k expose pod my-static-pod-cka2560 --name static-pod-service --type=NodePort --port 80
Verify:
k get svc,ep -l run=my-static-pod
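To test the Service end to end you can curl the assigned NodePort from the node; the port value is cluster-specific, so read it from the Service first (the jsonpath expression below is a standard kubectl option):
NODE_PORT=$(kubectl get svc static-pod-service -o jsonpath='{.spec.ports[0].nodePort}')
curl localhost:$NODE_PORT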
Question 3 | Kubelet client/server cert info
Solve this question on: ssh cka5248
Find the Issuer and Extended Key Usage values on cka5248-node1 for:
- Kubelet Client Certificate
- Kubelet Server Certificate
Write the information into file /opt/course/3/certificate-info.txt.
Solution:
# Connect to the worker node
ssh cka5248-node1
sudo -i
# Find certificates
find /var/lib/kubelet/pki
# Check client certificate
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
# Check server certificate
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
Write to the file:
# /opt/course/3/certificate-info.txt
Issuer: CN = kubernetes
X509v3 Extended Key Usage: TLS Web Client Authentication
Issuer: CN = cka5248-node1-ca@1730211854
X509v3 Extended Key Usage: TLS Web Server Authentication
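One way to assemble the file directly from openssl output (a sketch; -issuer is a standard x509 option and -ext requires OpenSSL 1.1.1+):
{
  openssl x509 -noout -issuer -in /var/lib/kubelet/pki/kubelet-client-current.pem
  openssl x509 -noout -ext extendedKeyUsage -in /var/lib/kubelet/pki/kubelet-client-current.pem
  openssl x509 -noout -issuer -in /var/lib/kubelet/pki/kubelet.crt
  openssl x509 -noout -ext extendedKeyUsage -in /var/lib/kubelet/pki/kubelet.crt
} > /opt/course/3/certificate-info.txt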
Question 4 | Pod Ready if Service is reachable
Solve this question on: ssh cka3200
Create a Pod named ready-if-service-ready of image nginx:1-alpine with:
- A LivenessProbe that executes command true
- A ReadinessProbe that checks if http://service-am-i-ready:80 is reachable
Then create a second Pod named am-i-ready of image nginx:1-alpine with label id: cross-server-ready.
Solution:
First pod with probes:
k run ready-if-service-ready --image=nginx:1-alpine --dry-run=client -o yaml > 4_pod1.yaml
Edit the manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1-alpine
    name: ready-if-service-ready
    livenessProbe:
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'
Create the first pod:
k -f 4_pod1.yaml create
Create the second pod with labels for the service:
k run am-i-ready --image=nginx:1-alpine --labels="id=cross-server-ready"
Verify:
k get pod ready-if-service-ready
k describe svc service-am-i-ready
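The ReadinessProbe only succeeds once the Service has endpoints that answer the wget, so the first Pod stays unready until the second Pod is running and selected:
k get ep service-am-i-ready          # should list the am-i-ready Pod IP
k get pod ready-if-service-ready     # READY should turn 1/1 shortly after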
Question 5 | Kubectl sorting
Solve this question on: ssh cka8448
Create two bash scripts:
- /opt/course/5/find_pods.sh - lists all Pods sorted by AGE
- /opt/course/5/find_pods_uid.sh - lists all Pods sorted by metadata.uid
Solution:
# Create first script
cat > /opt/course/5/find_pods.sh << EOF
kubectl get pod -A --sort-by=.metadata.creationTimestamp
EOF
# Create second script
cat > /opt/course/5/find_pods_uid.sh << EOF
kubectl get pod -A --sort-by=.metadata.uid
EOF
# Make them executable
chmod +x /opt/course/5/find_pods.sh
chmod +x /opt/course/5/find_pods_uid.sh
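Run both scripts once to confirm the sorting works:
sh /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods_uid.sh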
Question 6 | Fix Kubelet
Solve this question on: ssh cka1024
Fix the kubelet on controlplane node cka1024 and confirm it's available in Ready state. Then create a Pod called success in default Namespace of image nginx:1-alpine.
Solution:
# Check service status
sudo -i
service kubelet status
# Find the issue in service config
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Correct the binary path in the config file:
# Change from:
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS...
# To:
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS...
Restart the kubelet:
systemctl daemon-reload
service kubelet restart
service kubelet status
Create the pod:
k run success --image nginx:1-alpine
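Verify the fix took effect:
k get node                # cka1024 should report Ready
k get pod success -o wide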
Question 7 | Etcd Operations
Solve this question on: ssh cka2560
Perform etcd operations:
- Get the etcd version and store it at /opt/course/7/etcd-version
- Make a snapshot of etcd at /opt/course/7/etcd-snapshot.db
Solution:
# Get etcd version
sudo -i
k -n kube-system exec etcd-cka2560 -- etcd --version > /opt/course/7/etcd-version
# Create snapshot
ETCDCTL_API=3 etcdctl snapshot save /opt/course/7/etcd-snapshot.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
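Optionally verify the snapshot file; etcdctl snapshot status reads the local file, so no certs are needed (the subcommand is deprecated in newer etcd in favor of etcdutl, but works as a sanity check):
ETCDCTL_API=3 etcdctl snapshot status /opt/course/7/etcd-snapshot.db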
Question 8 | Get Controlplane Information
Solve this question on: ssh cka8448
Check how controlplane components are started/installed and the DNS application. Write findings to /opt/course/8/controlplane-components.txt.
Solution:
# Check for systemd services
sudo -i
find /usr/lib/systemd | grep kube
service kubelet status
# Check for static pods
find /etc/kubernetes/manifests/
# Check DNS implementation
k -n kube-system get deploy
Write to file:
# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
Question 9 | Kill Scheduler, Manual Scheduling
Solve this question on: ssh cka5248
Temporarily stop the kube-scheduler, create a Pod named manual-schedule (image httpd:2-alpine) and schedule it manually on the controlplane node, then restart the scheduler and verify it works with a second Pod manual-schedule2.
Solution:
Stop the scheduler:
sudo -i
cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..
Create unscheduled pod:
k run manual-schedule --image=httpd:2-alpine
Manually schedule the pod:
k get pod manual-schedule -o yaml > 9.yaml
Edit to add nodeName:
spec:
  nodeName: cka5248   # add the controlplane node name
Apply the changes:
k -f 9.yaml replace --force
Restart the scheduler:
cd /etc/kubernetes/manifests/
mv ../kube-scheduler.yaml .
Create a second pod to verify:
k run manual-schedule2 --image=httpd:2-alpine
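Verify: manual-schedule should sit on the controlplane node (set via nodeName), while manual-schedule2 was placed by the restarted scheduler:
k get pod -o wide | grep manual-schedule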
Question 10 | PV PVC Dynamic Provisioning
Solve this question on: ssh cka6016
Create a StorageClass named local-backup with provisioner rancher.io/local-path and volumeBindingMode WaitForFirstConsumer, with retained PVs. Adjust the Job at /opt/course/10/backup.yaml to use a PVC with 50Mi storage using this StorageClass.
Solution:
Create StorageClass:
cat > sc.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-backup
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
k -f sc.yaml apply
Modify the backup job:
cd /opt/course/10
cp backup.yaml backup.yaml_ori
vim backup.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
  namespace: project-bern
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: local-backup
---
apiVersion: batch/v1
kind: Job
metadata:
  name: backup
  namespace: project-bern
spec:
  backoffLimit: 0
  template:
    spec:
      volumes:
      - name: backup
        persistentVolumeClaim:
          claimName: backup-pvc
      containers:
      - name: bash
        image: bash:5
        command:
        - bash
        - -c
        - |
          set -x
          touch /backup/backup-$(date +%Y-%m-%d-%H-%M-%S).tar.gz
          sleep 15
        volumeMounts:
        - name: backup
          mountPath: /backup
      restartPolicy: Never
Apply and verify:
k delete -f backup.yaml
k apply -f backup.yaml
k -n project-bern get job,pod,pvc,pv
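Because the StorageClass uses reclaimPolicy: Retain, the dynamically provisioned PV will outlive its PVC; the RECLAIM POLICY column confirms this:
k get pv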
Question 11 | Create Secret and mount into Pod
Solve this question on: ssh cka2560
Create a Namespace named secret. In it, create a Pod named secret-pod (image busybox:1) that mounts the existing Secret from /opt/course/11/secret1.yaml as a volume at /tmp/secret1 and exposes a new Secret secret2 (user=user1, pass=1234) through the environment variables APP_USER and APP_PASS.
Solution:
# Create namespace
k create ns secret
# Configure first secret
cp /opt/course/11/secret1.yaml 11_secret1.yaml
# Update namespace in the file
k -f 11_secret1.yaml create
# Create second secret
k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
# Create pod
k -n secret run secret-pod --image=busybox:1 --dry-run=client -o yaml -- sh -c "sleep 1d" > 11.yaml
Edit the pod manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret
spec:
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1
    name: secret-pod
    env:
    - name: APP_USER
      valueFrom:
        secretKeyRef:
          name: secret2
          key: user
    - name: APP_PASS
      valueFrom:
        secretKeyRef:
          name: secret2
          key: pass
    volumeMounts:
    - name: secret1
      mountPath: /tmp/secret1
      readOnly: true
  volumes:
  - name: secret1
    secret:
      secretName: secret1
Apply and verify:
k -f 11.yaml create
k -n secret exec secret-pod -- env | grep APP
k -n secret exec secret-pod -- find /tmp/secret1
Question 12 | Schedule Pod on Controlplane Nodes
Solve this question on: ssh cka5248
Create a Pod named pod1 of image httpd:2-alpine that is only scheduled on controlplane nodes, without adding new labels to any nodes.
Solution:
# Create pod template
k run pod1 --image=httpd:2-alpine --dry-run=client -o yaml > 12.yaml
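Before editing, it can help to confirm the taint and label on the controlplane node that this solution relies on (the names below are the kubeadm defaults):
k describe node cka5248 | grep Taints
k get node cka5248 --show-labels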
Edit the pod manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2-alpine
    name: pod1-container
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
Apply and verify:
k -f 12.yaml create
k get pod pod1 -o wide
Question 13 | Multi Containers and Pod shared Volume
Solve this question on: ssh cka3200
Create a Pod named multi-container-playground with three containers c1, c2 and c3 sharing a non-persistent volume vol: c1 (nginx:1-alpine) exposes the node name in env var MY_NODE_NAME, c2 (busybox:1) writes the date into /vol/date.log every second, and c3 (busybox:1) streams that file.
Solution:
k run multi-container-playground --image=nginx:1-alpine --dry-run=client -o yaml > 13.yaml
Edit the pod manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1-alpine
    name: c1
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: vol
      mountPath: /vol
  - image: busybox:1
    name: c2
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  - image: busybox:1
    name: c3
    command: ["sh", "-c", "tail -f /vol/date.log"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  volumes:
  - name: vol
    emptyDir: {}
Apply and verify:
k -f 13.yaml create
k logs multi-container-playground -c c3
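All three containers share the same emptyDir, which you can also confirm from c1:
k exec multi-container-playground -c c1 -- tail -3 /vol/date.log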
Question 14 | Find out Cluster Information
Solve this question on: ssh cka8448
Gather the following cluster information and write it to /opt/course/14/cluster-info:
1. How many controlplane nodes are available?
2. How many worker nodes are available?
3. What is the Service CIDR?
4. Which networking (CNI) plugin is configured, and where is its config file?
5. Which suffix will static Pod names have on node cka8448?
Solution:
# Check nodes
k get node
# Check service CIDR
sudo -i
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
# Check networking configuration
find /etc/cni/net.d/
cat /etc/cni/net.d/10-weave.conflist
Write to file:
# /opt/course/14/cluster-info
1: 1
2: 0
3: 10.96.0.0/12
4: Weave, /etc/cni/net.d/10-weave.conflist
5: -cka8448
Question 15 | Cluster Event Logging
Solve this question on: ssh cka6016
Write a command into /opt/course/15/cluster_events.sh that shows the latest events in the whole cluster, ordered by creation time. Then delete the kube-proxy Pod and write the events this causes to /opt/course/15/pod_kill.log; kill its containerd container and write those events to /opt/course/15/container_kill.log.
Solution:
# Create event script
echo 'kubectl get events -A --sort-by=.metadata.creationTimestamp' > /opt/course/15/cluster_events.sh
# Delete kube-proxy pod
k -n kube-system get pod -l k8s-app=kube-proxy -owide
k -n kube-system delete pod kube-proxy-lf2fs
sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log
# Kill container
sudo -i
crictl ps | grep kube-proxy
crictl rm --force 2fd052f1fcf78
sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log
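The DaemonSet and the kubelet recreate what was removed, which is what the logged events should show:
k -n kube-system get pod -l k8s-app=kube-proxy   # new Pod after the delete
crictl ps | grep kube-proxy                      # new container after the rm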
Question 16 | Namespaces and Api Resources
Solve this question on: ssh cka3200
Write the names of all namespaced Kubernetes resources into /opt/course/16/resources.txt, then find the project-* Namespace with the highest number of Roles and write its name and Role count to /opt/course/16/crowded-namespace.txt.
Solution:
# Find namespaced resources
k api-resources --namespaced -o name > /opt/course/16/resources.txt
# Find namespace with most roles
k -n project-jinan get role --no-headers | wc -l
k -n project-miami get role --no-headers | wc -l
k -n project-melbourne get role --no-headers | wc -l
k -n project-seoul get role --no-headers | wc -l
k -n project-toronto get role --no-headers | wc -l
# Write to file
echo "project-miami with 300 roles" > /opt/course/16/crowded-namespace.txt
Question 17 | Operator, CRDs, RBAC, Kustomize
Solve this question on: ssh cka6016
An operator managed with Kustomize under /opt/course/17/operator logs permission errors in Namespace operator-prod. Fix its RBAC so it can list the resources it watches, and add a new Student resource named student4.
Solution:
# Check logs for issues
cd /opt/course/17/operator
k -n operator-prod logs operator-7f4f58d4d9-v6ftw
# Update RBAC permissions
vim base/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operator-role
  namespace: default
rules:
- apiGroups:
  - education.killer.sh
  resources:
  - students
  - classes
  verbs:
  - list
# Add new student
vim base/students.yaml
---   # appended as an additional document in students.yaml
apiVersion: education.killer.sh/v1
kind: Student
metadata:
  name: student4
spec:
  name: Some Name
  description: Some Description
# Apply changes
kubectl kustomize /opt/course/17/operator/prod | kubectl apply -f -
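Afterwards the operator logs should no longer show permission errors, and the new resource should exist (students is the resource name used in the Role above):
k -n operator-prod logs deploy/operator | tail
k -n operator-prod get students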
CKA Tips
Terminal Efficiency
# Useful aliases
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
# Fast access to history
history | grep <command>
Ctrl+r # Reverse search
# Fast pod deletion
k delete pod mypod --grace-period 0 --force
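# Another widely used convenience: a shell variable for dry-run output,
# which shortens manifest generation considerably
export do='--dry-run=client -o yaml'
k run test --image=nginx:1-alpine $do > pod.yaml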
Vim Tips
# Create .vimrc
cat > ~/.vimrc << EOF
set tabstop=2
set expandtab
set shiftwidth=2
EOF
# Line numbers
:set number # Show line numbers
:set nonumber # Hide line numbers
# Copy/paste in vim
# Mark lines: Esc+V (then arrow keys)
# Copy marked: y
# Cut marked: d
# Paste: p or P
Remember that in the actual exam, you'll be connecting to instances via SSH, so focus on quick command execution rather than elaborate setups.